Assignment for Introduction to Neural Computation

Parts C & D

By: Michael Trushkin

Data

All data is two-dimensional: points <x, y> with -1 <= x, y <= 1.
The dataset consists of every point <x, y> where x = m/100 for an integer m between -100 and +100,
and y = n/100 for an integer n between -100 and +100.

Suppose that:

<x, y> has the value 1 iff:

1/2 < x^2+y^2 < 3/4
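The grid and labelling rule above can be sketched as follows (variable names are my own; the step size and the ring thresholds are taken from the text):

```python
import numpy as np

# Build all points <x, y> with x = m/100, y = n/100, m and n in [-100, 100].
m = np.arange(-100, 101)
xs, ys = np.meshgrid(m / 100.0, m / 100.0)
X = np.stack([xs.ravel(), ys.ravel()], axis=1)   # shape (201 * 201, 2)

# Label 1 inside the ring 1/2 < x^2 + y^2 < 3/4, and -1 everywhere else.
r2 = X[:, 0] ** 2 + X[:, 1] ** 2
y = np.where((r2 > 0.5) & (r2 < 0.75), 1, -1)
```

This gives 40,401 labelled points; the positive class is the ring between the two circles.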

About Part C

Train a neural network, using backpropagation, to predict the given function.
Show the output of each of the neurons in the network.

What we will do

About the neural network

We use momentum because, in the tests I have run, it simply converges faster.
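For reference, a minimal sketch of the momentum update (the learning rate and momentum coefficient here are illustrative values, not the exact ones used in the run):

```python
import numpy as np

def momentum_step(w, grad, velocity, lr=0.1, beta=0.9):
    # Accumulate a decaying average of past gradients, then step along it.
    velocity = beta * velocity - lr * grad
    return w + velocity, velocity

# Toy example: minimise f(w) = w^2, whose gradient is 2w.
w, v = 5.0, 0.0
for _ in range(200):
    w, v = momentum_step(w, 2 * w, v)
```

The velocity term carries information from previous steps, which is what speeds up convergence along consistent gradient directions.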

Plot some random uniformly sampled data

Plot balanced data, i.e. a sample where 50% of the examples are positive and 50% are negative.
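A sketch of how such a balanced 50/50 sample could be drawn from the grid (the sample size and function name are my own choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Rebuild the grid and ring labels from the "Data" section.
m = np.arange(-100, 101)
xs, ys = np.meshgrid(m / 100.0, m / 100.0)
X = np.stack([xs.ravel(), ys.ravel()], axis=1)
r2 = (X ** 2).sum(axis=1)
y = np.where((r2 > 0.5) & (r2 < 0.75), 1, -1)

def balanced_sample(X, y, n_per_class=500):
    # Draw the same number of positive and negative examples, then shuffle.
    pos = rng.choice(np.flatnonzero(y == 1), n_per_class, replace=False)
    neg = rng.choice(np.flatnonzero(y == -1), n_per_class, replace=False)
    idx = rng.permutation(np.concatenate([pos, neg]))
    return X[idx], y[idx]

Xb, yb = balanced_sample(X, y)
```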

Define the neural network, train it, and show the results

Try predicting a few cases without training

Our training function, which trains the network on the data in mini-batches
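The mini-batch loop can be sketched like this (a generic shuffled batcher; the batch size is illustrative, and the actual backprop step is not shown):

```python
import numpy as np

def iterate_minibatches(X, y, batch_size, rng):
    # Shuffle once per epoch, then yield consecutive slices of the permutation.
    idx = rng.permutation(len(X))
    for start in range(0, len(X), batch_size):
        sl = idx[start:start + batch_size]
        yield X[sl], y[sl]

rng = np.random.default_rng(0)
X = np.zeros((1000, 2))
y = np.ones(1000)
batches = list(iterate_minibatches(X, y, 32, rng))
```

Each epoch then applies one backprop-with-momentum update per batch.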

Plotting the network's prediction on the full dataset every 5th iteration

We can clearly see the model is learning, and it generalizes well even to the complete dataset.

Next I would like to showcase the output of every single neuron in the network.

Part D

Now use the trained neurons from the next-to-last layer of Part C as inputs, with only an Adaline for the output.
(That is, feed the Adaline the outputs of the Part C neurons in the layer below the output, and train only the Adaline.)
Describe how accurate the Adaline can be. Give diagrams.

Draw whatever conclusions you think are appropriate from your results.

My prediction

Adaline's delta-rule update is essentially what backpropagation does at a single output unit, so I expect very similar results.

Note that the Adaline's inputs must be binary, that is, 1 or -1.

So we run the network on our training set and test set,
then take the outputs of the (n-1)-th layer and convert them to binary,
and finally use those as the input to the Adaline.
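The steps above can be sketched as follows. Here `H` is a random stand-in for the hidden-layer activations (the real ones come from the trained network), and the learning rate and epoch count are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def to_binary(H):
    # Threshold sigmoid-style activations at 0.5 to get +/-1 inputs.
    return np.where(H >= 0.5, 1.0, -1.0)

def train_adaline(Hb, y, lr=0.01, epochs=50):
    # LMS / delta rule: update against the linear (pre-threshold) output.
    w = np.zeros(Hb.shape[1])
    b = 0.0
    for _ in range(epochs):
        for h, t in zip(Hb, y):
            out = w @ h + b
            w += lr * (t - out) * h
            b += lr * (t - out)
    return w, b

# Toy demonstration: one hidden unit perfectly encodes the target.
H = rng.random((200, 4))      # stand-in for hidden activations in [0, 1]
Hb = to_binary(H)
y = Hb[:, 0]
w, b = train_adaline(Hb, y)
```

Classification then uses the sign of the Adaline's linear output.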

Convert the output to -1 and 1 (as the output of the neuron)

99.4% accuracy on the training set!

Plot the Adaline neuron's prediction on the whole set

as well as calculating the accuracy

Conclusion

As expected, an Adaline neuron connected to the penultimate layer of a trained neural network with 97% accuracy
can achieve the same accuracy.

Consider a neural network whose final layer is simply the identity function,
so that the layer before it actually produces the final result.

If our Adaline neuron simply trains to be that identity function (which it can),
we get the same output as the original neural network.

That is, a neural network whose final layer is an Adaline neuron is the same thing;
the only difference is how we train the model.